26.4. Naive Bayes Models

Then there are still things you can do.

One of them is something called naive Bayes models.

In fact, that's one of the most commonly used ideas here.

We basically say: who needs these complex networks if we can approximate the world by a simple one?

We use a network where a single cause directly influences a number of effects.
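
As a quick sketch of what that structure says in formulas, with generic placeholder names Cause and Effect_i for the variables:

\[
P(\mathrm{Cause}, \mathrm{Effect}_1, \dots, \mathrm{Effect}_n)
  = P(\mathrm{Cause}) \prod_{i=1}^{n} P(\mathrm{Effect}_i \mid \mathrm{Cause})
\]

so the effects are conditionally independent of each other given the cause.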

The naive Bayes assumption is essentially that your Bayesian network looks like that.

In some areas, that's even reasonable.

If you do diagnosis of technical systems, you often apply something called the single fault hypothesis.

Usually, if your transistor radio breaks, there's exactly one problem with it.

Having two unrelated problems simultaneously is just not that likely.

And there are a lot of effects.

Your transistor radio starts to make funny noises, smells not so nice, gets very warm.

Usually a single transistor broke, and then the radio gets warm because there's still electricity flowing through it, it smokes because it gets really hot in one little spot, and it makes funny noises, or no noises at all.

So if you always assume Bayesian networks that look like that, then you have what's called a naive Bayes model.

The people who proposed this model called them Bayesian classifiers, and the people who believe the world is more complex called them not naive Bayes models but idiot Bayes models, because they are so much simpler.

But they work in certain situations, and if you're aware that you're applying the single cause hypothesis, they can work surprisingly well.

The point here is really that you have a naive Bayes model down here in the lower part.

If you get rid of that upper part and just stick it all into this one cause variable, then that actually works, and that's the reason.

Remember our dentistry example, where we had the dentist looking for cavities, and there were toothaches and this funny hook catching and those kinds of things.

We only had one possible cause, namely a cavity.

That's a naive Bayes model, which is one of the reasons it was so easy to handle: the naive Bayes assumption makes things extremely simple.

And so we get something very simple with our methods.

We have one cause, the thing we want to classify, and a variety of observed values; then we get the classic equation again, and we can look for the most likely class, which is essentially the most likely hypothesis.
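
Spelled out, the classic equation referred to here is presumably the standard naive Bayes rule, with C the class (the single cause) and e_1, …, e_n the observed values:

\[
P(C \mid e_1, \dots, e_n) \;\propto\; P(C) \prod_{i=1}^{n} P(e_i \mid C),
\qquad
c^{*} = \operatorname*{argmax}_{c} \; P(c) \prod_{i=1}^{n} P(e_i \mid c)
\]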

That gives you a learning technique that, for instance in the restaurant example, works quite well: not quite as well as decision trees, but with a lot less a priori knowledge you need to take into account, and the results are not that terrible.
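
To make the learning technique concrete, here is a minimal sketch in Python; everything in it (the class name NaiveBayes, the toy attributes and labels) is made up for illustration. Learning is just counting relative frequencies for P(class) and P(attribute value | class), and classification applies the argmax rule above. The add-one (Laplace) smoothing is an extra assumption, not from the lecture, so that unseen attribute values don't zero out a whole class; working in log space avoids underflow when there are many attributes.

```python
from collections import defaultdict
import math

class NaiveBayes:
    """Toy naive Bayes classifier over discrete attributes."""

    def fit(self, examples, labels):
        self.n = len(labels)
        self.classes = sorted(set(labels))
        self.class_counts = defaultdict(int)
        self.value_counts = defaultdict(int)  # (class, attr index, value) -> count
        self.seen_values = defaultdict(set)   # attr index -> values seen in training
        for x, c in zip(examples, labels):
            self.class_counts[c] += 1
            for i, v in enumerate(x):
                self.value_counts[(c, i, v)] += 1
                self.seen_values[i].add(v)
        return self

    def predict(self, x):
        # Most likely class: argmax over log P(c) + sum_i log P(x_i | c).
        def log_score(c):
            logp = math.log(self.class_counts[c] / self.n)
            for i, v in enumerate(x):
                num = self.value_counts[(c, i, v)] + 1          # add-one smoothing
                den = self.class_counts[c] + len(self.seen_values[i])
                logp += math.log(num / den)
            return logp
        return max(self.classes, key=log_score)

# Hypothetical toy data, loosely in the spirit of the restaurant example:
# attributes are (patrons, hungry), the class is whether to wait.
examples = [("full", "yes"), ("some", "yes"), ("none", "no"), ("full", "no")]
labels = ["wait", "wait", "leave", "leave"]
clf = NaiveBayes().fit(examples, labels)
print(clf.predict(("some", "yes")))  # -> "wait"
```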

So there are a lot of statistical methods which have been around since before Bayesian networks, which can be subsumed under Bayesian networks, and which, given Bayesian networks, you can then explain easily to yourself.

For instance, naive Bayes models or Bayesian classifiers basically just take the lower half of these kinds of networks.

You can always do that.

What you do is: if you have a network of that form, you basically take all of the upper part, condense it into a single variable, and you have a naive Bayes network.
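
A sketch of why that condensation is sound, under the assumption that every effect's parents lie in the upper part of the network: if the upper part consists of variables C_1, …, C_k, define one new variable

\[
C := (C_1, \dots, C_k), \qquad P(C) := P(C_1, \dots, C_k),
\]

so C ranges over tuples of values of the upper variables with their joint distribution as its prior. Each effect then depends only on C, which is exactly the single-cause shape of a naive Bayes model, at the price of a potentially huge domain for C.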
